The availability of large datasets of organism images, combined with advances in artificial intelligence (AI), has significantly enhanced the study of organisms through images, unveiling biodiversity patterns and macro-evolutionary trends. However, existing machine learning (ML)-ready organism datasets have several limitations. First, these datasets often focus only on species classification, overlooking tasks involving visual traits of organisms. Second, they lack detailed visual trait annotations, such as pixel-level segmentation, that are crucial for in-depth biological studies. Third, these datasets predominantly feature organisms in their natural habitats, posing challenges for aquatic species like fish, where underwater images often suffer from poor visual clarity, obscuring critical biological traits. This gap hampers the study of aquatic biodiversity patterns, which is necessary for assessing the impacts of climate change, and evolutionary research on aquatic species morphology. To address this, we introduce the Fish-Visual Trait Analysis (Fish-Vista) dataset: a large, annotated collection of about 80K fish images spanning 3,000 species, supporting several challenging and biologically relevant tasks, including species classification, trait identification, and trait segmentation. These images were curated through a sophisticated data processing pipeline applied to a cumulative set of images obtained from various museum collections. Fish-Vista ensures that the visual traits in each image are clearly visible, and it provides fine-grained labels of the visual traits present in each image. It also offers pixel-level annotations of 9 different traits for about 7,000 fish images, facilitating additional trait segmentation and localization tasks. The ultimate goal of Fish-Vista is to provide a clean, carefully curated, high-resolution dataset that can serve as a foundation for accelerating biological discoveries using advances in AI. Finally, we provide a comprehensive analysis of state-of-the-art deep learning techniques on Fish-Vista.
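As an illustration of the kind of benchmarking the dataset supports, below is a minimal sketch of fine-tuning an off-the-shelf classifier for the species-classification task. The folder layout ("data/train"), class count, and hyperparameters are assumptions for illustration, not details of the actual Fish-Vista distribution.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical layout: one folder per species under data/train
# (not the actual Fish-Vista distribution format).
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("data/train", transform=tfm)
train_dl = DataLoader(train_ds, batch_size=64, shuffle=True, num_workers=4)

# Start from ImageNet weights and replace the classification head
# with one output per species found in the training folder.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```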
-
We have been successfully developing Artificial Intelligence (AI) models for automatically classifying fish species using neural networks over the last three years during the “Biology Guided Neural Network” (BGNN) project*1. We continue our efforts in another, broader project, “Imageomics: A New Frontier of Biological Information Powered by Knowledge-Guided Machine Learning”*2. One of the main topics in the Imageomics Project is “Morphological Barcoding”. Within the Morphological Barcoding study, we are trying to build a gold-standard method to identify species in different taxonomic groups based on their external morphology. This list of characters will include, but not be limited to, landmarks and quantitative traits such as measurements of distances, areas, angles, proportions, colors, histograms, patterns, shapes, and outlines. The taxonomic groups will be limited by the data available, and we will be using fish as the topic of interest in this preliminary study.

In this current study, we have focused on extracting morphological characters that rely on anatomical features of fish, such as the location of the eye, body length, and the area of the head. We developed a schematic workflow to describe how we processed the data and extracted the information (Fig. 1). We performed our analysis on the segmented images produced by the Karpatne Team within the BGNN project (Bart et al. 2021). Segmentation was performed using artificial neural networks for semantic segmentation (Long et al. 2015); the segments to be detected were the eye, head, trunk, caudal fin, pectoral fin, dorsal fin, anal fin, and pelvic fin. Segmented images, metadata, and species lists were given as input to the workflow. During the cleaning and filtering subroutines, a subset of data was created by filtering down to the desired segmented images with corresponding metadata. In the validation step, segmented images were checked by comparing the number of specimens in the original image to the separate bounding-boxed specimen images, noting violations in the segmentations, counts of segments, relative positions of the segments with respect to one another, traces of batch effects, and comparisons of size and shape. Based on these validation criteria, each segmented image was assigned a score from 1 to 5, similar to the Adobe XMP Basic namespace.

The landmarks and traits to be used in the study were extracted from the current literature, while remaining mindful that some features may not be successfully extracted computationally. Using the landmark list, landmarks were extracted by adapting the descriptions from the literature onto the segments, such as taking the leftmost point on the head as the tip of the snout and the top-left point on the pelvic fin as the base of the pelvic fin. These 2D vectors (coordinates) were then fine-tuned by adjusting their positions onto the outline of the fish, since most of the landmarks are located on the outline. Procrustes analysis*3 was performed to scale all of the measurements together, and point clouds were generated. These vectors were stored as landmark data. Segment centroids were also treated as landmarks. Extracted landmarks were validated by comparing their relative positions to one another and, where available, against their manually captured positions. A score was assigned based on these comparisons, similar to the segmentation validation score.
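To make the landmark-extraction step concrete, here is a minimal sketch of deriving landmarks from a per-pixel segment mask and aligning landmark configurations with Procrustes analysis via SciPy. The integer segment codes, landmark choices, and toy coordinates are hypothetical; the encoding used by the actual BGNN segmentation outputs may differ.

```python
import numpy as np
from scipy.spatial import procrustes

# Hypothetical integer codes for the segment labels in the mask.
SEGMENTS = {"eye": 1, "head": 2, "trunk": 3, "pelvic_fin": 8}

def segment_pixels(mask: np.ndarray, name: str) -> np.ndarray:
    """(row, col) coordinates of every pixel belonging to a segment."""
    return np.argwhere(mask == SEGMENTS[name])

def centroid(pixels: np.ndarray) -> np.ndarray:
    """Segment centroid; used directly as the eye landmark."""
    return pixels.mean(axis=0)

def leftmost(pixels: np.ndarray) -> np.ndarray:
    """Leftmost pixel of a segment, e.g. head -> tip of the snout."""
    return pixels[pixels[:, 1].argmin()].astype(float)

def top_left(pixels: np.ndarray) -> np.ndarray:
    """Top-left pixel, e.g. pelvic fin -> base of the pelvic fin."""
    order = np.lexsort((pixels[:, 1], pixels[:, 0]))  # by row, then col
    return pixels[order[0]].astype(float)

def landmarks_for(mask: np.ndarray) -> np.ndarray:
    """Stack a few illustrative landmarks into an (n, 2) array."""
    return np.stack([
        centroid(segment_pixels(mask, "eye")),
        leftmost(segment_pixels(mask, "head")),
        top_left(segment_pixels(mask, "pelvic_fin")),
    ])

if __name__ == "__main__":
    # Toy alignment of two landmark configurations; with real data these
    # would come from landmarks_for() on two specimens' masks. Procrustes
    # analysis removes differences in position, scale, and rotation.
    ref = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
    sample = np.array([[0.1, 0.1], [2.1, 0.2], [1.0, 2.0]])
    _, _, disparity = procrustes(ref, sample)
    print(f"Procrustes disparity: {disparity:.4f}")
```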
Based on the trait list definitions, traits were extracted by measuring distances between two landmarks, angles among three landmarks, areas enclosed by three or more landmarks, areas of the segments, and ratios between two distances, between two areas, or between a distance and the square root of an area; these values were then stored as trait data. Finally, these values were compared within their own species clusters to detect errors and check whether they remained within expected bounds. Trait scores were calculated from these error checks, similar to the segmentation scores, with the aim of selecting good-quality data for further analyses such as Principal Component Analysis.

Our work on the extraction of features from segmented digital specimen images has shown that the accuracy of traits such as measurements, areas, and angles depends on the accuracy of the landmarks, and the accuracy of the landmarks is highly dependent on the segmentation of the parts of the specimen. Landmarks located on the outline of the body (the combination of the head and trunk segments of the fish) were found to be more accurate than landmarks that represent inner features, such as the mouth and the pectoral fin in some taxonomic groups. However, the eye location is almost always accurate, since it is based on the centroid of the eye segment. In the remaining part of this study, we will improve the score calculations for segments, images, landmarks, and traits, and we will assess the accuracy of the scores by comparing the statistical results obtained from analyses of the landmark and trait data.
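The trait measurements described above are simple geometric functions of the landmark coordinates. A minimal sketch follows; the landmark names, pairings, and example values are illustrative, not from the study's trait list.

```python
import numpy as np

def distance(p, q):
    """Euclidean distance between two landmarks (e.g. a body length)."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def angle(a, b, c):
    """Angle at landmark b formed by landmarks a and c, in degrees."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def polygon_area(points):
    """Shoelace area enclosed by three or more landmarks."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    return float(0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1))))

def length_to_sqrt_area_ratio(d, area):
    """Dimensionless ratio of a distance to the square root of an area."""
    return d / np.sqrt(area)

# Example: a hypothetical trait relating snout-to-eye distance to head size.
snout, eye = (10.0, 4.0), (25.0, 9.0)
head_area = 350.0  # e.g. pixel count of the head segment
print(length_to_sqrt_area_ratio(distance(snout, eye), head_area))
```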
-
Image‐based machine learning tools are an ascendant ‘big data’ research avenue. Citizen science platforms, like iNaturalist, and museum‐led initiatives provide researchers with an abundance of data and knowledge to extract, including metadata, species identifications, and phenomic data. Ecological and evolutionary biologists are increasingly applying complex, multi‐step processes to such data. These processes often include machine learning techniques, frequently built by others, that are difficult for other members of a collaboration to reuse.

We present a conceptual workflow model for machine learning applications that use image data to extract biological knowledge in the emerging field of imageomics. We derive an implementation of this conceptual workflow for a specific imageomics application, expressed as a formal workflow definition that adheres to FAIR principles, allows fully automated and reproducible execution, and consists of reusable workflow components.

We outline technologies and best practices for creating an automated, reusable, and modular workflow, and we show how they promote the reuse of machine learning models and their adaptation to new research questions. This conceptual workflow can be adapted: it can be semi‐automated, contain components different from those presented here, or have parallel components for comparative studies.

We encourage researchers, both computer scientists and biologists, to build upon this conceptual workflow, which combines machine learning tools on image data to answer novel scientific questions in their respective fields.
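As one way to read the “reusable workflow components” idea, here is a minimal sketch of a modular pipeline in which each step is a named, swappable component operating on a shared context. The step names, context structure, and runner are illustrative assumptions, not the paper's actual formal workflow definition.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List, Optional

Context = Dict[str, Any]

@dataclass
class Step:
    """One reusable workflow component: a name plus a function that
    reads from and writes to a shared context dictionary."""
    name: str
    run: Callable[[Context], Context]

def fetch_images(ctx: Context) -> Context:
    # Placeholder: e.g. download records from iNaturalist or a museum API.
    ctx["images"] = []
    return ctx

def classify_species(ctx: Context) -> Context:
    # Placeholder: e.g. apply a pretrained classifier to each image.
    ctx["labels"] = [None for _ in ctx["images"]]
    return ctx

PIPELINE: List[Step] = [
    Step("fetch", fetch_images),
    Step("classify", classify_species),
]

def run(pipeline: List[Step], ctx: Optional[Context] = None) -> Context:
    ctx = ctx or {}
    for step in pipeline:
        print(f"running step: {step.name}")  # simple provenance trail
        ctx = step.run(ctx)
    return ctx

if __name__ == "__main__":
    result = run(PIPELINE)
```

Because each component is self-contained, a collaborator can swap in a different classifier, add a segmentation step, or run parallel variants for comparative studies without rewriting the rest of the pipeline, which is the reuse property the abstract emphasizes.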